111 research outputs found

    Parameterized Analysis of Online Steiner Tree Problems

    Steiner tree problems occupy a central place in both the area of approximation algorithms and that of online algorithms. Many variants have been studied from the point of view of competitive analysis, and for several of these variants tight bounds are known. However, in several cases worst-case analysis is overly pessimistic and fails to explain the relative performance of algorithms. We show how adaptive analysis can help resolve this problem. As case studies, we consider the Steiner tree problem in directed graphs and the Priority Steiner tree problem.
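    To illustrate the online setting analyzed above, here is a minimal sketch (not the paper's algorithm) of the classical greedy heuristic for the online Steiner tree problem: each arriving terminal is connected to the closest vertex already spanned by the tree. For undirected graphs this greedy strategy is known to be logarithmically competitive in the number of terminals. The graph, edge weights, and arrival order below are illustrative placeholders.

        import heapq

        def dijkstra(graph, source):
            """Shortest-path distances from `source` in a weighted graph
            given as {u: {v: weight, ...}, ...}."""
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph.get(u, {}).items():
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        def greedy_online_steiner(graph, terminals):
            """Greedy online Steiner tree: connect each arriving terminal to the
            nearest vertex already spanned by the tree. Returns the total cost of
            the connections made (a sketch; a full implementation would also
            record the connecting paths)."""
            spanned = {terminals[0]}          # the first terminal acts as the root
            total_cost = 0.0
            for t in terminals[1:]:
                dist = dijkstra(graph, t)     # distances from the new terminal
                total_cost += min(dist.get(v, float("inf")) for v in spanned)
                spanned.add(t)
            return total_cost

        # Example: a small weighted graph; terminals arrive online as 'a', 'd', 'c'.
        g = {
            "a": {"b": 1, "c": 4},
            "b": {"a": 1, "c": 2, "d": 5},
            "c": {"a": 4, "b": 2, "d": 1},
            "d": {"b": 5, "c": 1},
        }
        print(greedy_online_steiner(g, ["a", "d", "c"]))  # 4 + 1 -> 5.0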

    Competitive Search in the Line and the Star with Predictions


    Online Search with a Hint

    The linear search problem, informally known as the cow path problem, is one of the fundamental problems in search theory. In this problem, an immobile target is hidden at some unknown position on an unbounded line, and a mobile searcher, initially positioned at some specific point of the line called the root, must traverse the line so as to locate the target. The objective is to minimize the worst-case ratio of the distance traversed by the searcher to the distance of the target from the root, which is known as the competitive ratio of the search. In this work we study this problem in a setting in which the searcher has a hint concerning the target. We consider three settings regarding the nature of the hint: i) the hint suggests the exact position of the target on the line; ii) the hint suggests the direction of the optimal search (i.e., to the left or the right of the root); and iii) the hint is a general k-bit string that encodes some information concerning the target. Our objective is to study the Pareto-efficiency of strategies in this model. Namely, we seek optimal, or near-optimal, tradeoffs between the searcher's performance if the hint is correct (i.e., provided by a trusted source) and if the hint is incorrect (i.e., provided by an adversary).
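    As a concrete reference point for the competitive ratio defined above (a sketch of the classical hint-free strategy, not of the strategies studied in this work): the doubling, or zig-zag, strategy alternately explores the two directions with turning points at distances 1, 2, 4, 8, ... from the root and is known to achieve the optimal deterministic competitive ratio of 9, under the standard assumption that the target lies at distance at least 1 from the root.

        def doubling_search_cost(target):
            """Distance traversed by the classical doubling (zig-zag) strategy
            before reaching a target hidden at signed position `target` on the
            line (|target| >= 1 assumed). Turning points are at +1, -2, +4, -8, ...;
            positive turning points lie to the right of the root, negative to the left."""
            traversed = 0.0
            step = 1.0
            direction = 1          # start by exploring the right half-line
            while True:
                turn = direction * step
                # Does this sweep pass over the target?
                if (direction > 0 and 0 < target <= turn) or (direction < 0 and turn <= target < 0):
                    traversed += abs(target)
                    return traversed
                traversed += 2 * abs(turn)   # walk to the turning point and back to the root
                step *= 2.0
                direction = -direction

        # The worst case occurs when the target sits just beyond a turning point,
        # e.g. at -(2 + eps); the ratio in the last column never exceeds 9.
        for target in [1.0, -2.5, 7.0, -100.0]:
            cost = doubling_search_cost(target)
            print(target, cost, cost / abs(target))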

    Online Computation with Untrusted Advice

    The advice model of online computation captures the setting in which the online algorithm is given some partial information concerning the request sequence. This paradigm makes it possible to establish tradeoffs between the amount of this additional information and the performance of the online algorithm. However, unlike in real life, where advice is a recommendation that we can choose to follow or ignore based on its trustworthiness, in the current advice model the online algorithm treats the advice as infallible. This means that if the advice is corrupt or, worse, if it comes from a malicious source, the algorithm may perform poorly. In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze online algorithms that are robust and perform well even when the advice is generated in a malicious, adversarial manner. To this end, we focus on well-studied online problems such as ski rental, online bidding, bin packing, and list update. For ski rental and online bidding, we show how to obtain algorithms that are Pareto-optimal with respect to the competitive ratios achieved; this improves upon the framework of Purohit et al. [NeurIPS 2018], in which Pareto-optimality is not necessarily guaranteed. For bin packing and list update, we give online algorithms with worst-case tradeoffs in their competitiveness, depending on whether the advice is trusted or not; this is motivated by the work of Lykouris and Vassilvitskii [ICML 2018] on the paging problem, in which, however, the competitiveness depends on the reliability of the advice. Furthermore, we demonstrate how to prove lower bounds, within this model, on the tradeoff between the number of advice bits and the competitiveness of any online algorithm. Last, we study the effect of randomization: here we show that for ski rental there is a randomized algorithm that Pareto-dominates any deterministic algorithm with advice of any size. We also show that a single random bit is not always inferior to a single advice bit, as is the case in the standard model.
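    For concreteness, a minimal sketch of the kind of consistency-robustness tradeoff discussed above, following the deterministic scheme of Purohit et al. [NeurIPS 2018] rather than the algorithms of this paper: a trust parameter lambda in (0, 1] controls how aggressively the prediction is followed, yielding a ratio of at most 1 + lambda when the prediction is correct and at most 1 + 1/lambda regardless of the prediction. The prices and predictions below are illustrative.

        import math

        def ski_rental_with_prediction(b, predicted_days, lam, actual_days):
            """Deterministic ski rental with an (untrusted) prediction, in the style
            of Purohit et al. [NeurIPS 2018]. `b` is the purchase price (renting
            costs 1 per day), `predicted_days` is the advice, `lam` in (0, 1] is the
            trust parameter, and `actual_days` is the true season length.
            Returns the cost paid by the algorithm."""
            if predicted_days >= b:
                buy_day = math.ceil(lam * b)   # advice says buy: commit early
            else:
                buy_day = math.ceil(b / lam)   # advice says rent: delay the purchase
            if actual_days < buy_day:
                return actual_days             # rented every day, never bought
            return (buy_day - 1) + b           # rented until buy_day, then bought

        def optimal_cost(b, actual_days):
            """Offline optimum: rent every day or buy immediately, whichever is cheaper."""
            return min(actual_days, b)

        # Correct advice gives a ratio of at most 1 + lam; adversarial advice is
        # still bounded by roughly 1 + 1/lam (the robustness guarantee).
        b, lam = 100, 0.5
        for predicted, actual in [(200, 200), (200, 50), (10, 200), (10, 50)]:
            alg = ski_rental_with_prediction(b, predicted, lam, actual)
            print(predicted, actual, alg / optimal_cost(b, actual))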